
    An Enhanced Visualization of DBT Imaging Using Blind Deconvolution and Total Variation Minimization Regularization

    Digital Breast Tomosynthesis (DBT) presents out-of-plane artifacts caused by features of high intensity. Given observed data and knowledge about the point spread function (PSF), deconvolution techniques recover data from a blurred version. However, a correct PSF is difficult to obtain and these methods amplify noise. When no information is available about the PSF, blind deconvolution can be used. Additionally, Total Variation (TV) minimization algorithms have achieved great success due to their virtue of preserving edges while reducing image noise. This work presents a novel approach in DBT through the study of out-of-plane artifacts using blind deconvolution and noise regularization based on TV minimization. Gradient information was also included. The methodology was tested using real phantom data and one clinical data set. The results were investigated using conventional 2D slice-by-slice visualization and 3D volume rendering. For the 2D analysis, the artifact spread function (ASF) and the Full Width at Half Maximum of the ASF (FWHMASF) were considered. The 3D quantitative analysis was based on the FWHM of disk profiles at 90°, noise, and signal-to-noise ratio (SNR) at 0° and 90°. A marked visual decrease of the artifact was observed, with reductions of FWHMASF (2D) and FWHM90° (volume rendering) of 23.8% and 23.6%, respectively. Although there was an expected increase in noise level, SNR values were preserved after deconvolution. Regardless of the methodology and visualization approach, the objective of reducing the out-of-plane artifact was accomplished. Both for the phantom and the clinical case, the artifact reduction in the z direction was markedly visible.
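    The core building block described above can be sketched compactly. Below is a minimal, illustrative implementation (not the authors' code) of TV-regularized deconvolution of a 2D slice by gradient descent on ||k * x - y||² + λ·TV(x); a blind variant would alternate an analogous update on the PSF itself. The Gaussian PSF, step size and λ are placeholder choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def tv_gradient(x, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term for a 2D image."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    px, py = dx / mag, dy / mag
    # -div(grad x / |grad x|), with backward differences for the divergence
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

def deconv_tv(y, psf, lam=0.02, iters=200, step=0.5):
    """Minimise ||psf * x - y||^2 + lam * TV(x) by plain gradient descent."""
    x = y.copy()
    psf_flip = psf[::-1, ::-1]  # adjoint of convolution = correlation
    for _ in range(iters):
        residual = fftconvolve(x, psf, mode="same") - y
        grad = fftconvolve(residual, psf_flip, mode="same") + lam * tv_gradient(x)
        x = np.clip(x - step * grad, 0.0, None)  # keep intensities non-negative
    return x

# Illustrative use on a synthetic slice with an assumed Gaussian PSF.
rng = np.random.default_rng(0)
slice_2d = rng.random((64, 64))
g = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
psf0 = np.outer(g, g)
psf0 /= psf0.sum()
restored = deconv_tv(slice_2d, psf0)
```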

    3D Generative Model Latent Disentanglement via Local Eigenprojection

    Designing realistic digital humans is extremely complex. Most data-driven generative models used to simplify the creation of their underlying geometric shape do not offer control over the generation of local shape attributes. In this paper, we overcome this limitation by introducing a novel loss function grounded in spectral geometry and applicable to different neural-network-based generative models of 3D head and body meshes. By encouraging the latent variables of mesh variational autoencoders (VAEs) or generative adversarial networks (GANs) to follow the local eigenprojections of identity attributes, we improve latent disentanglement and properly decouple the attribute creation. Experimental results show that our local eigenprojection disentangled (LED) models not only offer improved disentanglement with respect to the state of the art, but also maintain good generation capabilities with training times comparable to the vanilla implementations of the models. Our code and pre-trained models are available at github.com/simofoti/LocalEigenprojDisentangled.
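    To make the idea of tying a latent variable to a local spectral projection concrete, here is a loose, hypothetical sketch of one way such a loss term could be wired up; it is not the LED paper's actual formulation (see the linked repository for that). The region indices, eigenvector and latent assignment are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def local_eigenprojection_loss(z, verts, template, region_idx, eigvec, latent_idx):
    """
    z          : (B, D) latent codes of the mesh VAE/GAN
    verts      : (B, N, 3) decoded vertices
    template   : (N, 3) template mesh
    region_idx : (R,) vertex indices of one local attribute (e.g. the nose)
    eigvec     : (R,) Laplacian eigenvector of the attribute's mesh region
    latent_idx : latent dimension assigned to this attribute
    """
    disp = (verts - template.unsqueeze(0))[:, region_idx, :]  # (B, R, 3) local displacement
    disp_mag = disp.norm(dim=-1)                              # (B, R) per-vertex magnitude
    proj = disp_mag @ eigvec                                  # (B,) local eigenprojection
    return F.mse_loss(z[:, latent_idx], proj)                 # pull the latent towards it

# Illustrative call with random tensors (shapes only, hypothetical sizes).
B, N, D, R = 4, 5023, 64, 300
loss = local_eigenprojection_loss(
    torch.randn(B, D), torch.randn(B, N, 3), torch.randn(N, 3),
    torch.arange(R), torch.randn(R), latent_idx=0)
```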

    Zero-shot super-resolution with a physically-motivated downsampling kernel for endomicroscopy

    Super-resolution (SR) methods have seen significant advances thanks to the development of convolutional neural networks (CNNs). CNNs have been successfully employed to improve the quality of endomicroscopy imaging. Yet the inherent limitation of research on SR in endomicroscopy remains the lack of ground-truth high-resolution (HR) images, which are commonly used for both supervised training and reference-based image quality assessment (IQA). Therefore, alternative methods, such as unsupervised SR, are being explored. To address the need for non-reference image quality improvement, we designed a novel zero-shot super-resolution (ZSSR) approach that relies only on the endomicroscopy data to be processed, in a self-supervised manner, without the need for ground-truth HR images. We tailored the proposed pipeline to the idiosyncrasies of endomicroscopy by introducing both a physically-motivated Voronoi downscaling kernel, accounting for the endomicroscope’s irregular fibre-based sampling pattern, and realistic noise patterns. We also took advantage of video sequences, exploiting a series of frames for self-supervised zero-shot image quality improvement. We ran ablation studies to assess the contribution of the downscaling kernel and the noise simulation. We validated our methodology on both synthetic and original data. Synthetic experiments were assessed with reference-based IQA, while our results for original images were evaluated in a user study conducted with both expert and non-expert observers. The results demonstrated the superior image quality of ZSSR reconstructions in comparison to the baseline method. ZSSR is also competitive with supervised single-image SR, notably being the reconstruction technique preferred by experts.
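    The zero-shot training idea is easy to illustrate. The sketch below (a simplified assumption-laden example, not the authors' pipeline) trains a tiny CNN on a single frame by further degrading it with a chosen blur kernel plus noise and learning to undo that degradation; a plain Gaussian kernel stands in for the physically-motivated Voronoi kernel and the realistic fibre noise.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def degrade(img, kernel, scale=2, noise_std=0.02):
    """Blur with the given kernel, subsample by `scale`, add Gaussian noise."""
    pad = kernel.shape[-1] // 2
    blurred = F.conv2d(img, kernel, padding=pad)
    lr = blurred[..., ::scale, ::scale]
    return lr + noise_std * torch.randn_like(lr)

net = nn.Sequential(  # tiny image-to-image CNN trained per input frame
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))

frame = torch.rand(1, 1, 128, 128)              # the single frame to enhance
g = torch.exp(-0.5 * (torch.arange(-2, 3.) / 1.0) ** 2)
kernel = g[:, None] * g[None, :]
kernel = (kernel / kernel.sum())[None, None]    # stand-in for the Voronoi kernel

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):                            # self-supervised "zero-shot" loop
    lr_child = degrade(frame, kernel)           # degraded copy of the frame
    pred = net(F.interpolate(lr_child, scale_factor=2, mode="bilinear"))
    loss = F.l1_loss(pred, frame)               # learn to recover the frame itself
    opt.zero_grad(); loss.backward(); opt.step()

# Apply the learned mapping at the original scale to super-resolve the frame.
sr = net(F.interpolate(frame, scale_factor=2, mode="bilinear"))
```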

    Impact of total variation minimization in volume rendering visualization of breast tomosynthesis data

    Background and objective: Total Variation (TV) minimization algorithms have attracted great attention due to their virtue of decreasing noise while preserving edges. The purpose of this work is to implement and evaluate two TV minimization methods in 3D. Their performance is analyzed through 3D visualization of digital breast tomosynthesis (DBT) data with volume rendering. Methods: Both filters were studied with real phantom data and one clinical DBT data set. One algorithm was applied sequentially to all slices and the other was applied to the entire volume at once. The suitable Lagrange multiplier used in each filter equation was studied to reach the minimum 3D TV and the maximum contrast-to-noise ratio (CNR). Image blur was measured at 0° and 90° using two disks with different diameters (0.5 mm and 5.0 mm) and equal thickness. The quality of unfiltered and filtered data was analyzed with volume rendering at 0° and 90°. Results: For phantom data, the sequential filter yielded a decrease of 25% in the 3D TV value and increases of 19% and 30% in CNR at 0° and 90°, respectively. When the filter was applied directly in 3D, the TV value was reduced by 35% and an increase of 36% in CNR was achieved at both 0° and 90°. For the smaller disk, the full width at half maximum (FWHM) at 0° was unchanged and decreased by about 2.5% at 90° for both filters. For the larger disk, there was a 2.5% increase in FWHM at 0° for both filters and a decrease of 6.28% and 1.69% in FWHM at 90° with the sequential filter and the 3D filter, respectively. When applied to clinical data, the performance of each filter was consistent with that obtained with the phantom. Conclusions: Data analysis confirmed the relevance of these methods in improving the quality of DBT images. Additionally, this type of 3D visualization may play an important complementary role in DBT imaging, as it allows all DBT data to be visualized at once and filters applied in all three dimensions to be properly analyzed.
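    The two figures of merit used to tune the filters above are standard and can be written down directly; the snippet below is a generic illustration (the ROI masks and volume are placeholders, not the study's data).

```python
import numpy as np

def total_variation_3d(vol):
    """Isotropic 3D total variation: sum of voxel gradient magnitudes."""
    dz, dy, dx = np.gradient(vol.astype(float))
    return np.sqrt(dx ** 2 + dy ** 2 + dz ** 2).sum()

def cnr(vol, signal_mask, background_mask):
    """Contrast-to-noise ratio between a signal ROI and a background ROI."""
    s, b = vol[signal_mask], vol[background_mask]
    return abs(s.mean() - b.mean()) / b.std()

# Example: evaluate both metrics on a toy volume; in practice one would sweep
# the filter parameter (e.g. the Lagrange multiplier) and record how they move.
vol = np.random.rand(32, 64, 64)
sig = np.zeros_like(vol, dtype=bool); sig[14:18, 28:36, 28:36] = True
bkg = np.zeros_like(vol, dtype=bool); bkg[14:18, 4:12, 4:12] = True
print(total_variation_3d(vol), cnr(vol, sig, bkg))
```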

    Learning from irregularly sampled data for endomicroscopy super-resolution: a comparative study of sparse and dense approaches

    PURPOSE: Probe-based confocal laser endomicroscopy (pCLE) enables performing an optical biopsy via a probe. pCLE probes consist of multiple optical fibres arranged in a bundle, which together generate signals in an irregularly sampled pattern. Current pCLE reconstruction is based on interpolating the irregular signals onto an over-sampled Cartesian grid using naive linear interpolation. It has been shown that convolutional neural networks (CNNs) can improve pCLE image quality. Yet classical CNNs may be suboptimal for irregular data. METHODS: We compare pCLE reconstruction and super-resolution (SR) methods taking either irregularly sampled or reconstructed pCLE images as input. We also propose to embed Nadaraya-Watson (NW) kernel regression into the CNN framework as a novel trainable CNN layer. We design deep learning architectures allowing high-quality pCLE images to be reconstructed directly from the irregularly sampled input data. We created synthetic sparse pCLE images to evaluate our methodology. RESULTS: The results were validated through an image quality assessment based on a combination of the following metrics: peak signal-to-noise ratio and the structural similarity index. Our analysis indicates that both dense and sparse CNNs outperform the reconstruction method currently used in the clinic. CONCLUSION: The main contribution of our study is a comparison of sparse and dense approaches to pCLE image reconstruction. We also implement trainable generalised NW kernel regression as a novel sparse approach and generate synthetic data for training pCLE SR.
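    The key ingredient, NW kernel regression as a trainable layer, can be sketched as follows. This is a hedged illustration in the spirit of the description above (not the paper's implementation): irregular fibre signals at known 2D positions are interpolated onto a Cartesian grid with a Gaussian kernel whose bandwidth is a learnable parameter, so it can receive gradients inside a larger CNN.

```python
import torch
import torch.nn as nn

class NWInterpolation(nn.Module):
    def __init__(self, init_bandwidth=2.0):
        super().__init__()
        # learnable log-bandwidth keeps the bandwidth positive
        self.log_bw = nn.Parameter(torch.tensor(float(init_bandwidth)).log())

    def forward(self, positions, values, grid):
        """
        positions : (M, 2) fibre centre coordinates (pixels)
        values    : (M,) signal measured at each fibre
        grid      : (H*W, 2) coordinates of the target Cartesian grid
        returns   : (H*W,) Nadaraya-Watson estimate at every grid point
        """
        bw = self.log_bw.exp()
        d2 = torch.cdist(grid, positions) ** 2        # (H*W, M) squared distances
        w = torch.exp(-0.5 * d2 / bw ** 2)            # Gaussian kernel weights
        return (w @ values) / (w.sum(dim=1) + 1e-8)   # weighted average per pixel

# Illustrative use: 600 fibres interpolated onto a 64x64 grid.
M, H, W = 600, 64, 64
pos = torch.rand(M, 2) * torch.tensor([H - 1.0, W - 1.0])
val = torch.rand(M)
yy, xx = torch.meshgrid(torch.arange(H, dtype=torch.float),
                        torch.arange(W, dtype=torch.float), indexing="ij")
grid = torch.stack([yy.flatten(), xx.flatten()], dim=1)
recon = NWInterpolation()(pos, val, grid).reshape(H, W)
```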

    Registration of Untracked 2D Laparoscopic Ultrasound to CT Images of the Liver using Multi-Labelled Content-Based Image Retrieval

    Laparoscopic Ultrasound (LUS) is recommended as a standard of care when performing laparoscopic liver resections, as it images sub-surface structures such as tumours and major vessels. Given that LUS probes are difficult to handle and some tumours are iso-echoic, registration of LUS images to a pre-operative CT has been proposed as an image-guidance method. This registration problem is particularly challenging due to the small field of view of LUS, and usually depends on both a manual initialisation and tracking to compose a volume, hindering clinical translation. In this paper, we extend a previously proposed registration approach using Content-Based Image Retrieval (CBIR), removing the requirement for tracking or manual initialisation. Pre-operatively, a set of possible LUS planes is simulated from CT and a descriptor is generated for each image. Then, a Bayesian framework is employed to estimate the most likely sequence of CT simulations that matches a series of LUS images. We extend our CBIR formulation to use multiple labelled objects and constrain the registration by separating liver vessels into portal vein and hepatic vein branches. The value of this new labelled approach is demonstrated on retrospective data from 5 patients. Results show that, by including a series of 5 untracked images in time, a single LUS image can be registered with accuracies ranging from 5.7 to 16.4 mm with a success rate of 78%. Initialisation of the LUS to CT registration with the proposed framework could potentially enable the clinical translation of these image fusion techniques.
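    One plausible realisation of the sequence estimation described above is a Viterbi-style pass over descriptor similarities; the sketch below is an assumption-laden illustration (the actual Bayesian model is the paper's): each LUS frame contributes a log-likelihood over pre-simulated CT planes, a Gaussian transition prior favours staying near the previous plane, and dynamic programming recovers the most likely plane sequence.

```python
import numpy as np

def most_likely_plane_sequence(log_likelihood, sigma=3.0):
    """
    log_likelihood : (T, P) log p(LUS frame t | simulated CT plane p)
    sigma          : width of the Gaussian transition prior between planes
    returns        : length-T array of plane indices (the Viterbi path)
    """
    T, P = log_likelihood.shape
    planes = np.arange(P)
    log_trans = -0.5 * ((planes[:, None] - planes[None, :]) / sigma) ** 2  # (prev, cur)
    score = log_likelihood[0].copy()
    back = np.zeros((T, P), dtype=int)
    for t in range(1, T):                       # standard Viterbi recursion
        cand = score[:, None] + log_trans
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_likelihood[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):               # backtrack the best sequence
        path.append(int(back[t, path[-1]]))
    return np.array(path[::-1])

# Illustrative use: 5 untracked LUS frames matched against 200 simulated planes.
seq = most_likely_plane_sequence(np.random.randn(5, 200))
```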

    Vessel segmentation for automatic registration of untracked laparoscopic ultrasound to CT of the liver

    PURPOSE: Registration of Laparoscopic Ultrasound (LUS) to a pre-operative scan such as Computed Tomography (CT) using blood vessel information has been proposed as a method to enable image guidance for laparoscopic liver resection. Current solutions to this problem could potentially enable clinical translation by bypassing the need for manual initialisation and tracking information. However, no reliable framework for the segmentation of vessels in 2D untracked LUS images has been presented. METHODS: We propose the use of a 2D UNet for the segmentation of liver vessels in 2D LUS images. We integrate these results into a previously developed registration method and show the feasibility of a fully automatic initialisation of the LUS to CT registration problem without a tracking device. RESULTS: We validate our segmentation using LUS data from 6 patients. We test multiple models by placing patient datasets into different combinations of training, testing and hold-out sets, and obtain mean Dice scores ranging from 0.543 to 0.706. Using these segmentations, we obtain registration accuracies between 6.3 and 16.6 mm in 50% of cases. CONCLUSIONS: We demonstrate the first use of deep learning (DL) for the segmentation of liver vessels in LUS. Our results show the feasibility of the UNet in detecting multiple vessel instances in 2D LUS images, and of potentially automating a LUS to CT registration pipeline.
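    For reference, the Dice scores quoted above are, in essence, computed as below; this is a generic Dice implementation for binary vessel masks, not the authors' code, and the toy masks are illustrative.

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example on two toy 2D masks.
a = np.zeros((64, 64), bool); a[20:40, 20:40] = True
b = np.zeros((64, 64), bool); b[25:45, 25:45] = True
print(round(float(dice_score(a, b)), 3))
```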

    In vivo estimation of target registration errors during augmented reality laparoscopic surgery

    PURPOSE: Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. METHODS: The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and the live video stream. From this, the surgeon can estimate the system's accuracy when locating subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. RESULTS: The phantom results show that the projected errors of surface features provide a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. CONCLUSION: We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.
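    The bookkeeping behind such an in vivo estimate can be sketched very simply. The snippet below is a deliberately crude proxy, not the SmartLiver algorithm: on-screen distances between each visible landmark and its overlaid counterpart are converted to millimetres with an assumed pixel spacing and summarised as an RMS figure used as a stand-in for the subsurface target registration error; all numbers are illustrative.

```python
import numpy as np

def overlay_error_mm(video_pts_px, overlay_pts_px, mm_per_px):
    """Per-landmark overlay error in millimetres from pixel coordinates."""
    diff = np.asarray(video_pts_px, float) - np.asarray(overlay_pts_px, float)
    return np.linalg.norm(diff, axis=1) * mm_per_px

def rms_error(errors_mm):
    """Root-mean-square summary of the per-landmark errors."""
    e = np.asarray(errors_mm, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))

# Illustrative numbers: three landmarks picked in the video and in the overlay,
# with an assumed pixel spacing of 0.8 mm per pixel at the landmark depth.
video = [(512, 300), (610, 420), (450, 500)]
overlay = [(520, 310), (605, 430), (462, 495)]
print(rms_error(overlay_error_mm(video, overlay, mm_per_px=0.8)))
```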